x86/memshr: fix preemption in relinquish_shared_pages()
author Jan Beulich <jbeulich@suse.com>
Tue, 17 Dec 2013 15:39:39 +0000 (16:39 +0100)
committer Jan Beulich <jbeulich@suse.com>
Tue, 17 Dec 2013 15:39:39 +0000 (16:39 +0100)
commit a0070f7a5ad8652c74c685a0ee5f10215402279d
tree bca9ce55646af23ef8fc8f129b31a8ed47d7e866
parent 0725f326358cbb2ba7f9626976e346b963d74c37
x86/memshr: fix preemption in relinquish_shared_pages()

For one, if hypercall_preempt_check() returned false the first time it
was called, it would never be called again: count, being compared for
equality against the threshold, was not reset to zero and so could
never match again.

And then, across a huge range of unshared pages, count was not
incremented at all, so no preemption would occur there either.

Fix this by using a biased increment (ratio 1:16 for unshared vs shared
pages), and flushing the count to zero in case of a "false" return from
hypercall_preempt_check().
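
The fixed counting pattern can be sketched in a minimal, self-contained
form. Everything here is a stand-in for illustration, not the
hypervisor's actual code: PREEMPT_THRESHOLD, stub_preempt_check(), and
walk() are hypothetical names, and the threshold value is an assumption
chosen so that shared pages (weight 0x10) trip the check sixteen times
as often as unshared ones (weight 1), matching the 1:16 bias described
above.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the Xen internals (assumptions only). */
#define PREEMPT_THRESHOLD 0x2000

static int  preempt_calls;   /* how often the check ran            */
static bool preempt_now;     /* what the stubbed check will return */

static bool stub_preempt_check(void)
{
    preempt_calls++;
    return preempt_now;
}

/*
 * Core of the fixed loop: a biased increment (0x10 per shared page,
 * 1 per unshared page) plus a flush of count to zero whenever the
 * preemption check returns false.  Returns the gfn to resume from
 * when preempted, or `end` when the whole range was processed.
 */
static unsigned long walk(unsigned long start, unsigned long end,
                          bool (*is_shared)(unsigned long))
{
    unsigned long gfn, count = 0;

    for ( gfn = start; gfn < end; gfn++ )
    {
        if ( is_shared(gfn) )
            count += 0x10;   /* shared page: expensive to relinquish */
        else
            ++count;         /* unshared: cheap, but still counted   */

        /* >= instead of ==, so overshooting can't skip the check. */
        if ( count >= PREEMPT_THRESHOLD )
        {
            if ( stub_preempt_check() )
                return gfn + 1;   /* resume point for continuation */
            count = 0;            /* flush, so the check recurs    */
        }
    }
    return end;
}

/* Trivial predicates for exercising the two extremes. */
static bool all_unshared(unsigned long gfn) { (void)gfn; return false; }
static bool all_shared(unsigned long gfn)   { (void)gfn; return true;  }
```

With the flush in place, a long run of unshared pages still reaches the
check every PREEMPT_THRESHOLD pages, and a false return no longer
silences all future checks.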

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Tim Deegan <tim@xen.org>
xen/arch/x86/mm/mem_sharing.c